
    TRANSITIONING TO AN ALTERNATIVE ASSESSMENT: COMPUTER-BASED TESTING AND KEY FACTORS RELATED TO TESTING MODE

    Computer-Based Testing (CBT) is becoming widespread due to its many identified merits, including productive item development, flexible test delivery, self-selection options for test takers, immediate feedback, results management, standard setting, and so on. Transitioning to CBT has raised concern over the effect of administration mode on test takers’ scores compared with Paper-and-Pencil-Based Testing (PPT). In this comparability study, we compared the effects of the two media (CBT vs. PPT) by investigating the score comparability of a General English test taken by Iranian graduate students at Chabahar Maritime University, to see whether the scores obtained from the two testing modes differed. To achieve this goal, two versions of the same test were administered to 100 intermediate-level test takers, organized in one testing group, on two separate testing occasions. A paired-sample t-test comparing the means revealed the superiority of CBT over PPT, with a .01 difference at p < .05. ANOVA results indicated that the two external moderator factors, prior computer familiarity and attitudes, had no significant effect on test takers’ CBT scores. Furthermore, the greatest percentage of test takers preferred the test features presented in the computerized version of the test.
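
    A minimal sketch of the paired-sample t-test named above, on entirely hypothetical score arrays (the study's actual data are not reproduced here); scipy's ttest_rel compares matched CBT and PPT scores from the same test takers:

# Hypothetical illustration of the paired-sample t-test described in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ppt_scores = rng.normal(70, 8, size=100)            # hypothetical PPT scores, n = 100
cbt_scores = ppt_scores + rng.normal(1, 3, 100)     # hypothetical matched CBT scores

t_stat, p_value = stats.ttest_rel(cbt_scores, ppt_scores)
mean_diff = np.mean(cbt_scores - ppt_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, mean difference = {mean_diff:.2f}")
# A p-value below .05 would indicate a significant mode effect, as reported.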

    COMPUTER ADAPTIVE TESTING (CAT) DESIGN; TESTING ALGORITHM AND ADMINISTRATION MODE INVESTIGATION

    Since the advent of technology to transform education, computer technology has pervaded many fields of study, such as language learning and testing. Chapelle (2010) distinguishes three main motives for using technology in language testing: efficiency, equivalence, and innovation. The computer, as a frequently used technological tool, has been widely examined in the field of language assessment and testing. A computer-adaptive language test (CALT) is a subtype of computer-assisted language test because it is administered at a computer terminal or on a personal computer. The issue that currently needs more attention and prompt investigation is the effect of testing mode and paradigm on the comparability and equivalency of data obtained from the two modes of presentation, i.e., traditional paper-and-pencil tests (PPT) and computerized tests. Establishing the comparability and equivalency of a computerized test with its paper-and-pencil counterpart is critical. Accordingly, this study indicates that in order to replace a conventional paper-and-pencil test with a computer-adaptive one, we need to show that the two versions of the test are comparable; in other words, that the validity and reliability of the computerized counterpart are not violated.
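
    The abstract does not specify the adaptive algorithm, so the following is a generic, hypothetical illustration of how a computer-adaptive test might select items under a one-parameter (Rasch) IRT model: administer the item whose difficulty is closest to the current ability estimate, score the response, and update the estimate; this is a sketch, not the paper's design:

# Hypothetical sketch of a basic adaptive item-selection loop (Rasch model).
import numpy as np

def rasch_prob(theta, b):
    """Probability of a correct response given ability theta and difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def run_cat(item_bank, true_theta, n_items=10, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = 0.0                              # start from an average ability estimate
    available = list(range(len(item_bank)))
    for _ in range(n_items):
        # Under a Rasch model, the most informative item has b closest to theta.
        idx = min(available, key=lambda i: abs(item_bank[i] - theta))
        available.remove(idx)
        correct = rng.random() < rasch_prob(true_theta, item_bank[idx])
        theta += step if correct else -step  # simple step update of the estimate
        step *= 0.9                          # shrink steps as evidence accumulates
    return theta

bank = np.linspace(-3, 3, 60)                # hypothetical bank of 60 item difficulties
print(f"estimated ability: {run_cat(bank, true_theta=1.2):.2f}")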

    Comparability of Computer-Based Testing and Paper-Based Testing: Testing Mode Effect, Testing Mode Order, Computer Attitudes and Testing Mode Preference

    With the proliferation of computer technology in educational testing, computer-based testing (henceforth CBT), as a green computing strategy, is gaining popularity over conventional paper-based testing (henceforth PBT) due to advantages such as efficient administration, flexible scheduling, and immediate feedback. Since some testing programs have begun to offer both versions of a test simultaneously, the effectiveness of CBT has been questioned by some scholars. To this aim, this study investigated the score equivalency of a test taken by 228 Iranian undergraduate students at a state university in the Chabahar region of Iran to see whether the scores from the two administration modes were equivalent. Two versions of the test were administered to the participants in two testing groups on four testing occasions, in a counterbalanced administration sequence with a four-week interval. One-way ANOVA and Pearson correlation tests were used to compare the mean scores and to examine the relationship of testing order, computer attitudes, and testing mode preference with testing performance. The findings revealed that test takers’ scores did not differ between the two modes and that the moderator variables were not external factors that might affect students’ performance on CBT.
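
    A hedged sketch of the analyses named above: a one-way ANOVA comparing mean CBT scores across the counterbalanced groups, and a Pearson correlation between a moderator (here, attitude) and CBT performance. All data, group sizes, and scale values are hypothetical:

# Hypothetical illustration of the one-way ANOVA and Pearson correlation analyses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a_cbt = rng.normal(72, 9, 114)      # hypothetical: CBT-first group
group_b_cbt = rng.normal(71, 9, 114)      # hypothetical: PBT-first group
attitude = rng.normal(3.5, 0.7, 228)      # hypothetical attitude-scale scores
cbt_scores = np.concatenate([group_a_cbt, group_b_cbt])

f_stat, p_anova = stats.f_oneway(group_a_cbt, group_b_cbt)
r, p_corr = stats.pearsonr(attitude, cbt_scores)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Attitude vs. CBT score: r = {r:.2f}, p = {p_corr:.3f}")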

    A Comparative Study of the Government and Private Sectors’ Effectiveness in ELT Program: A Case of Iranian Intermediate EFL Learners’ Oral Proficiency Examination

    The learning context in which learners acquire language skills, especially oral proficiency, is a crucial factor in an English Language Teaching (henceforth ELT) program. In fact, a language learning context in which communicative principles are applied can greatly help learners use the language communicatively in real situations. An ideal learning context requires a friendly environment that provides enough exposure to language input. Iranian students learn English both in government high schools (henceforth GHS) and in private language institutes (henceforth PLI); two different educational systems, each with its own special features, operate in these two contexts. Therefore, this study investigated the comparability of the two systems with regard to their differential effectiveness on students’ speaking performance. In addition to direct observations and two-stage interviews, a TOEFL speaking test was taken by 220 students from the two contexts in Behshahr, a city in Mazandaran province. The correlation of eight internal and external moderator factors with speaking performance was then examined. The results of the independent t-test indicated a statistically significant difference between the speaking performances of learners in the two contexts. Furthermore, Pearson correlation revealed that some variables might affect the speaking performance of language learners.
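
    As an illustration of the reported analysis, this sketch runs an independent-samples t-test on hypothetical speaking scores for the two contexts; the sample split and score distributions are invented for the example:

# Hypothetical illustration of the independent-samples t-test across contexts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
ghs_speaking = rng.normal(18, 4, 110)   # hypothetical GHS speaking scores
pli_speaking = rng.normal(21, 4, 110)   # hypothetical PLI speaking scores

t_stat, p_value = stats.ttest_ind(ghs_speaking, pli_speaking)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A significant p-value would mirror the reported context effect on speaking.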

    Computer-Based Testing: Score Equivalence and Testing Administration Mode Preference in a Comparative Evaluation Study

    Empirical evidence shows that two identical Computer-Based Testing (henceforth CBT) and Paper-and-Pencil-Based Testing (henceforth PBT) administrations do not always result in the same scores. Such conclusions are referred to as “the effect of testing administration mode” or “testing mode effect”. Moderators such as individual differences (e.g., prior computer experience or computer attitude) have been investigated [4] to see whether they influence test takers’ performance. The Guidelines for Computer-Based Tests and Interpretations [1] recommend eliminating the possible effects of such moderator variables on test takers’ performance. This research was conducted to provide the empirical evidence required on the existence of distinctive effects caused by changing the administration mode from conventional PBT to modern CBT. The relationship of testing mode preference to test takers’ CBT performance was also examined. Two equivalent tests and two questionnaires were used. Using descriptive statistics and ANOVA, the findings demonstrated that the two sets of CBT and PBT scores were comparable. Additionally, prior testing mode preference and gender had no significant effect on test takers’ CBT scores and were not considered variables that might affect performance on CBT.
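
    A small sketch of a score-equivalence check in this spirit: descriptive statistics per mode plus Cohen's d as an effect size. The study reports descriptive statistics and ANOVA; the effect size shown here is an added illustration on hypothetical data, not a figure from the paper:

# Hypothetical score-equivalence check: descriptives and Cohen's d.
import numpy as np

rng = np.random.default_rng(3)
cbt = rng.normal(70, 10, 120)   # hypothetical CBT scores
pbt = rng.normal(70, 10, 120)   # hypothetical PBT scores

pooled_sd = np.sqrt((cbt.var(ddof=1) + pbt.var(ddof=1)) / 2)
cohens_d = (cbt.mean() - pbt.mean()) / pooled_sd
print(f"CBT: M = {cbt.mean():.1f}, SD = {cbt.std(ddof=1):.1f}")
print(f"PBT: M = {pbt.mean():.1f}, SD = {pbt.std(ddof=1):.1f}")
print(f"Cohen's d = {cohens_d:.2f}  (near zero suggests comparable scores)")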

    An Introduction to the Ambiguity Tolerance: As a Source of Variation in English-Persian Translation

    Different individuals produce translations of different qualities of the same text. This may be due to one’s dominant cognitive style and particular personal characteristics (Khoshsima & Hashemi Toroujeni, 2017) in general, or to ambiguity tolerance in particular. A certain degree of ambiguity tolerance (henceforth AT) has been found to facilitate language learning (Chapelle, 1983; Ehrman, 1999; Ely, 1995). However, this influential factor has been largely overlooked in translation studies. The purpose of this study was to examine the relationship between AT and translation quality by identifying the expected positive correlation between the level of AT and the number of translation errors. Out of the 56 undergraduates of English-Persian Translation at Chabahar Maritime University (CMU), a sample of 34 top students was selected based on their scores on the reading comprehension subtest, which enjoys a special focus in many contexts (Khoshsima & Rezaeian Tiyar, 2014), and the structure subtest of the TOEFL. The participants responded to the SLTAS questionnaire for AT developed by Ely (1995). The questionnaire had a high alpha internal consistency reliability of .84 and a standardized item alpha of .84. In the next stage of the research, the participants translated a short passage of contemporary English into Persian, which was assessed using the SICAL III scale for translation quality assessment (TQA), developed and used by the Canadian Government’s Translation Bureau as its official TQA model (Williams, 1989). Analysis of the collected data revealed a significant positive correlation (r = .440, p < .05) between the participants’ degree of AT and the number of errors in their translations. Controlling for second-language proficiency, the correlation remained significantly positive (r = .397, p < .05). Accordingly, it was concluded that the more intolerant of ambiguity a person is, the more errors s/he is likely to make while translating; conversely, the more tolerant of ambiguity a person is, the higher the quality of his/her translation will be. Therefore, as expected, analysis of the data revealed a positive correlation throughout the sample between ambiguity intolerance and the number of translation errors.
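
    The partial correlation reported above (controlling for second-language proficiency) can be approximated by correlating the residuals of each variable after regressing out the covariate, as in this sketch on hypothetical data; note that the p-value's degrees of freedom are not adjusted for the covariate:

# Hypothetical illustration of a partial correlation via residuals.
import numpy as np
from scipy import stats

def residuals(y, x):
    """Residuals of y after removing its linear dependence on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

rng = np.random.default_rng(4)
proficiency = rng.normal(500, 40, 34)                  # hypothetical proficiency scores
at_scores = rng.normal(40, 6, 34)                      # hypothetical SLTAS scores
errors = 5 + 0.25 * at_scores + rng.normal(0, 3, 34)   # hypothetical error counts

r, p = stats.pearsonr(at_scores, errors)
r_partial, p_partial = stats.pearsonr(residuals(at_scores, proficiency),
                                      residuals(errors, proficiency))
print(f"zero-order: r = {r:.3f}, p = {p:.3f}")
print(f"partial (controlling proficiency): r = {r_partial:.3f}, p = {p_partial:.3f}")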

    Score Equivalence, Gender Difference, and Testing Mode Preference in a Comparative Study between Computer-Based Testing and Paper-Based Testing

    The score equivalency of Computer-Based Testing (henceforth CBT) and Paper-and-Pencil-Based Testing (henceforth PBT) versions has become a controversial issue during the last decade in Iran. The comparability of mean scores obtained from the CBT and PBT formats of a test should be investigated to see whether test takers’ performance is influenced by the effects of testing administration mode. This research was conducted to examine score equivalency across modes as well as the relationship of gender and testing mode preference with test takers’ performance on computerized testing. The information on testing mode preference and attitudes towards CBT and its features was supplemented by a focus group interview. Findings indicated that test takers’ scores did not differ between the two modes and that there was no statistically significant relationship between the above moderator variables and CBT performance.